
    Learning a Pose Lexicon for Semantic Action Recognition

    This paper presents a novel method for learning a pose lexicon comprising semantic poses defined by textual instructions and their associated visual poses defined by visual features. The proposed method simultaneously takes two input streams, semantic poses and visual pose candidates, and statistically learns a mapping between them to construct the lexicon. With the learned lexicon, action recognition can be cast as the problem of finding the maximum translation probability of a sequence of semantic poses given a stream of visual pose candidates. Experiments on pre-trained and zero-shot action recognition, conducted on the MSRC-12 gesture and WorkoutSu-10 exercise datasets, verify the efficacy of the proposed method. Comment: Accepted by the 2016 IEEE International Conference on Multimedia and Expo (ICME 2016). 6-page paper and 4 pages of supplementary material.
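    The scoring idea the abstract describes can be sketched as a toy computation: given a learned lexicon of translation probabilities between semantic and visual poses, score an instruction sequence against per-frame visual pose candidates. The lexicon entries, pose names, probabilities, and the per-frame independence assumption below are all illustrative, not the paper's actual model.

    ```python
    import math

    # Hypothetical lexicon: (semantic_pose, visual_pose) -> translation probability.
    lexicon = {
        ("arms_raised", "v1"): 0.8, ("arms_raised", "v2"): 0.2,
        ("arms_down",   "v1"): 0.1, ("arms_down",   "v2"): 0.9,
    }

    def action_log_prob(semantic_seq, candidate_streams):
        """Sum, over frames, the log probability of the best-matching candidate."""
        total = 0.0
        for sem, candidates in zip(semantic_seq, candidate_streams):
            best = max(lexicon.get((sem, v), 1e-9) for v in candidates)
            total += math.log(best)
        return total

    # Score the instruction "raise arms, then lower arms" over two frames,
    # each frame offering two visual pose candidates.
    score = action_log_prob(["arms_raised", "arms_down"],
                            [["v1", "v2"], ["v1", "v2"]])
    ```

    Recognition would then pick the action whose semantic pose sequence yields the highest such score.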

    EMIR: A novel emotion-based music retrieval system

    Music is inherently expressive of emotion and affects people's mood. In this paper, we present EMIR (Emotional Music Information Retrieval), a novel system that uses latent emotion elements in both music and non-descriptive queries (NDQs) to detect implicit emotional associations between users and music and thereby enhance Music Information Retrieval (MIR). We infer the latent emotional intent of queries via machine learning for emotion classification and compare the performance of emotion detection approaches on different feature sets. For this purpose, we extract music emotion features from lyrics and social tags crawled from the Internet, label some for training, model them in a high-dimensional emotion space, and recognize the latent emotion of users through query emotion analysis. The similarity between queries and music is computed with a verified BM25 model.
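    The final ranking step the abstract mentions, BM25 similarity between a query and a song's emotion terms, can be sketched with the standard Okapi BM25 formula. The tag "documents", the query, and the parameter values (k1=1.5, b=0.75) are illustrative assumptions, not EMIR's actual corpus or configuration.

    ```python
    import math

    def bm25(query, doc, corpus, k1=1.5, b=0.75):
        """Standard Okapi BM25 score of one document for a bag-of-terms query."""
        avgdl = sum(len(d) for d in corpus) / len(corpus)
        n = len(corpus)
        score = 0.0
        for term in set(query):
            df = sum(1 for d in corpus if term in d)
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            tf = doc.count(term)
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        return score

    # Emotion-term "documents" built from lyrics/social tags (illustrative).
    songs = [
        ["sad", "melancholy", "rain", "slow"],
        ["happy", "upbeat", "dance", "party"],
        ["sad", "lonely", "night"],
    ]
    query = ["sad", "rain"]
    ranked = sorted(songs, key=lambda d: bm25(query, d, songs), reverse=True)
    ```

    Songs sharing more (and rarer) emotion terms with the query rank higher; a song sharing no terms scores zero.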

    MemoryMesh – Lifelogs as densely linked hypermedia


    A survey on life logging data capturing

    With the recent availability of inexpensive wearable sensing technologies, the emergence of both off-line and on-line digital-storage capacity, and a growing acceptance of personal data gathering and online social sharing (timelines), life logging has become a mainstream research topic and is being embraced by early adopters. For example, we can now gather and store large volumes of personal data (location, photos, motion, orientation, etc.) very cheaply using an inexpensive smartphone. However, with many lifelogging tools available, the question of which ones to use has not been seriously addressed in the literature. In this work, we report on a survey of various approaches to capturing lifelog data, including the SenseCam/Vicon Revue, wearable smartphones, wearable video cameras, location loggers using GPS, bluetooth device loggers, human body biological state monitors (temperature, heart rate, etc.) and so on. We compare these devices and analyze the advantages and disadvantages of the different capture methods, including the consistency and integrity of capture, the 'life coverage' of the captured data, and people's attitudes and feelings towards these data-capture devices, which we assess through user studies and surveys. To complete this work, we give our opinion of the most suitable model of data capture for personal life logging across a variety of domains of use.

    ShareDay:A memory enhancing lifelogging system based on group sharing

    Lifelogging is the automatic capture of daily activities using environmental and wearable sensors such as the mobile phone or SenseCam. Lifelogging produces enormous data collections that present many organization and retrieval challenges, including semantic analysis, visualization, and motivating users of different ages and levels of technology experience to lifelog. In this paper, we present a new-generation lifelogging system that supports reminiscence by incorporating event segmentation and group sharing.

    ZhiWo: Activity tagging and recognizing system from personal lifelogs

    With the increasing use of mobile devices as personal recording, communication and sensing tools, extracting the semantics of life activities from sensed data (photos, accelerometer, GPS, etc.) is gaining widespread public awareness. A person who engages in long-term personal sensing is engaging in a process of lifelogging. Lifelogging typically involves using a range of (wearable) sensors to capture raw data, segmenting it into discrete activities, annotating it, and subsequently making it accessible through search or browsing tools. In this paper, we present an intuitive lifelog activity recording and management system called ZhiWo. Using a supervised machine learning approach, sensed data collected by mobile devices are automatically classified into different types of daily human activities, and these activities are interpreted as life-activity retrieval units for personal archives.
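    The supervised classification step the abstract describes can be sketched as mapping a sensed feature vector to an activity label. The feature choices (motion intensity, hour of day), the labels, and the 1-nearest-neighbour classifier below are illustrative assumptions, not ZhiWo's actual design.

    ```python
    import math

    # Labelled training examples: (motion_intensity, hour_of_day) -> activity.
    # Values are invented for illustration.
    train = [
        ((0.1, 9.0),  "working"),     # low motion, morning
        ((0.9, 13.0), "exercising"),  # high motion, midday
        ((0.2, 20.0), "relaxing"),    # low motion, evening
    ]

    def classify(features):
        """1-nearest-neighbour over the labelled training vectors."""
        return min(train, key=lambda t: math.dist(t[0], features))[1]

    # A new sensed sample: high motion around noon.
    label = classify((0.85, 12.5))
    ```

    Each classified segment would then serve as a retrieval unit in the personal archive, searchable by activity type.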